00100	.SEC EXPLANATIONS AND MODELS
00200	.SS The Nature of Explanation
00300		It is perhaps as difficult to explain explanation  itself  as
00400	it is to explain anything else. (Nothing, except everything, explains
00500	anything). The explanatory practices  of  different  sciences  differ
00600	widely but they all share the purpose of someone attempting to answer
00700	someone else's (or his own)  why-how-what-etc.    questions  about  a
00800	situation,  event,  episode,  object  or phenomenon. Thus explanation
00900	implies a dialogue whose participants share some interests,  beliefs,
01000	and values.  A consensus must exist about what are admissible and
01100	appropriate questions and answers.    The participants must agree  on
01200	what  is  a  sound  and  reasonable  question and what is a relevant,
01300	intelligible, and (believed) correct answer. The explainer  tries  to
01400	satisfy   a  questioner's  curiosity  by  making  comprehensible  why
01500	something is the way it is.   The answer  may  be  a  definition,  an
01600	example, a synonym, a story, a theory, a model-description, etc.  The
01700	answer attempts to satisfy curiosity by settling belief. A scientific
01800	explanation  aims  at  convergence  of  belief in the relevant expert
01900	community.
02000		Suppose a man dies and a questioner (Q) asks an explainer (E): 
02100		Q: Why  did  the  man  die?  
02200	One answer might be:
02300		E: Because he took cyanide.
02400	This explanation might be sufficient to satisfy Q's curiosity and he
02500	stops asking further questions. Or he might continue:
02600	        Q: Why did the cyanide kill him?
02700	and E replies:
02800	        E: Anyone who ingests cyanide dies.
02900	This explanation appeals to a universal generalization under which is
03000	subsumed  the  particular  fact  of  this  man's  death.  Subsumptive
03100	explanations  satisfy  some  questioners  but  not  others  who,  for
03200	example,  might  want  to  know  about  the  physiological mechanisms
03300	involved.                                                            
03400	        Q: How does cyanide work in causing death?
03500	        E: It stops respiration so the person dies from lack of oxygen.
03600		If Q has biochemical interests he might inquire further: 
03700		Q:What is cyanide's mechanism of drug action on the
03800		    respiratory center?              
03900		The last two questions refer to causes. When human action is
04000	to  be  explained,  confusion  easily  arises  between  appealing  to
04100	physical, mechanical causes and appealing to symbolic-level  reasons,
04200	that  is, learned, acquired procedures or strategies which seem to be
04300	of a different ontological order. (See Toulmin, 1971).
04400		It  is  established  clinical knowledge that the phenomena of
04500	the paranoid mode can be found associated with a variety of  physical
04600	disorders.    For example, paranoid thinking can be found in patients
04700	with  head   injuries,   hyperthyroidism,   hypothyroidism,   uremia,
04800	pernicious   anemia,   cerebral  arteriosclerosis,  congestive  heart
04900	failure, malaria and epilepsy.      Also drug  intoxications  due  to
05000	alcohol,  amphetamines,  marihuana  and LSD can be accompanied by the
05100	paranoid mode. In these cases the paranoid mode is not a  first-order
05200	disorder  but  a  way  of  processing information in reaction to some
05300	other underlying disorder. To account for the association of paranoid
05400	thought  with  these  physical  states  of  illness,  a psychological
05500	theorist might be tempted to hypothesize that a  purposive  cognitive
05600	system would attempt to explain ill health by attributing it to other
05700	malevolent human agents. But before making such an explanatory  move,
05800	we must consider the at-times elusive distinction between reasons and
05900	causes in explanations of human behavior.
06000		One  view  of  the  association  of  the  paranoid  mode with
06100	physical disorders might be that the physical illness  simply  causes
06200	the paranoia, through some unknown mechanism, at a physical level
06300	beyond the influence of deliberate self-direction  and  self-control.
06400	That  is,  the  resultant  paranoid  mode  represents  something that
06500	happens to a person as victim, not  something  that  he  does  as  an
06600	active  agent.   Mechanical causes thus provide one type of reason in
06700	explaining behavior. Another view is that the paranoid  mode  can  be
06800	explained  in terms of symbolically represented reasons consisting of
06900	rules and patterns of rules which specify an agent's  intentions  and
07000	beliefs.   In  a given situation does a person as an agent recognize,
07100	monitor and control what he is doing or trying to do?    Or  does  it
07200	just happen to him automatically without conscious deliberation?
07300		This question raises a third view, namely  that  unrecognized
07400	reasons,  aspects of the symbolic representation which are sealed off
07500	and inaccessible to voluntary control, can function like causes.   If
07600	they  can  be brought to consciousness, such reasons can sometimes be
07700	modified voluntarily by the agent, as a language user, by reflexively
07800	talking to and instructing himself.  This second-order monitoring and
07900	control through language  contrasts  with  an  agent's  inability  to
08000	modify  mechanical  causes  or  symbolic reasons which lie beyond the
08100	influence of self-criticism and self-emancipation carried out through
08200	linguistically  mediated  argumentation.    Timeworn conundrums about
08300	concepts of free-will, determinism, responsibility, consciousness and
08400	the  powers  of  mental  action  here  plague  us  unless we can take
08500	advantage  of  a  computer  analogy  in  which  a  clear  and  useful
08600	distinction  is  drawn  between  levels  of  mechanical  hardware and
08700	symbolically represented programs. This important distinction will be
08800	elaborated on shortly.
08900	
09000		Each  of these three views provides a serviceable perspective
09100	depending on how a disorder is to be explained and corrected.    When
09200	paranoid  processes occur during amphetamine intoxication they can be
09300	viewed as biochemically caused and beyond the  patient's  ability  to
09400	control  volitionally through internal self-correcting dialogues with
09500	himself.  When a paranoid moment occurs in a normal person, it can be
09600	viewed  as  involving  a symbolic misinterpretation.  If the paranoid
09700	misinterpretation is recognized as unjustified, a normal  person  has
09800	the  emancipatory  power  to  revise  or  reject  it through internal
09900	debate. Between these extremes of drug-induced  paranoid  states  and
10000	the self-correctible paranoid moments of the normal person, lie cases
10100	of paranoid personalities, paranoid reactions, and the paranoid mode
10200	associated    with    the    major   psychoses   (schizophrenic   and
10300	manic-depressive).
10400		One opinion has it that the major psychoses are a consequence
10500	of unknown  physical  causes  and  are  beyond  deliberate  voluntary
10600	control.   But  what  are we to conclude about paranoid personalities
10700	and paranoid reactions where no hardware disorder  is  detectable  or
10800	suspected?  Are  such  persons  to  be  considered  patients  to whom
10900	something is mechanically happening at the physical level or are they
11000	agents  whose  behavior  is  a  consequence  of  what  they do at the
11100	symbolic level?  Or are they both agent and patient depending on
11200	how  one  views  the self-modifiability of their symbolic processing?
11300	In these perplexing cases we shall take the position that in  normal,
11400	neurotic and characterological paranoid modes, the psychopathology
11500	represents something that happens to a man as a consequence of what
11600	he has experientially undergone, of something he now does, and of
11700	something he now undergoes.    Thus he is both agent and victim whose
11800	symbolic  processes  have  powers  to  do and liabilities to undergo.
11900	His liabilities are reflexive in  that  he  is  victim  to,  and  can
12000	succumb to, his own symbolic structures.
12100	
12200		From this standpoint I  would  postulate  a  duality  at  the
12300	symbolic  level  between  reasons  and  causes. That is, a reason can
12400	operate as an unrecognized cause in one context and be offered  as  a
12500	recognized  justification  in  another. It is, of course, not reasons
12600	themselves  which  operate  as  causes  but  the  execution  of   the
12700	reason-rules  which  serves  as  a  determinant  of  behavior.  Human
12800	symbolic behavior  is  non-determinate  to  the  extent  that  it  is
12900	self-determinate.    Thus  the power to select among alternatives, to
13000	make some decisions freely and to change one's mind is  non-illusory.
13100	When  a reason is recognized to function as a cause and is accessible
13200	to self-monitoring (the monitoring of monitoring), emancipation  from
13300	it  can occur through change or rejection of belief. In this sense an
13400	at    least    two-levelled    system    is    self-changeable    and
13500	self-emancipatory, within limits.
13600		Explanations  both  in  terms  of  causes  and reasons can be
13700	indefinitely extended and endless questions  can  be  asked  at  each
13800	level of analysis.  Just as the participants in explanatory dialogues
13900	decide what is taken to be problematic, so they  also  determine  the
14000	termini   of   questions   and  answers.   Each  discipline  has  its
14100	characteristic stopping points and boundaries.
14200		Underlying such explanatory dialogues are larger and  smaller
14300	constellations   of   concepts   which   are  taken  for  granted  as
14400	nonproblematic background.    Hence in considering the strategies  of
14500	the paranoid mode "it goes without saying" that any living teleonomic
14600	system, as the larger constellation, strives for maintenance and
14700	expansion  of life. Also it should go without saying that, at a lower
14800	level, ion transport takes place through nerve-cell membranes.  Every
14900	function of an organism can be viewed as governing a subfunction
15000	beneath and depending on a transfunction above which  calls  it  into
15100	play for a purpose.
15200		Just as there are many alternative ways of describing,  there
15300	are many alternative ways of explaining.  An explanation is geared to
15400	some  level  of  what  the  dialogue  participants  take  to  be  the
15500	fundamental  structures  and processes under consideration.  Since in
15600	psychiatry   we   cope   with   patients'   problems   using   mainly
15700	symbolic-conceptual techniques (it is true that the pill, the knife,
15800	and electricity are also available), we are interested in aspects of
15900	human  conduct  which can be explained, understood, and modified at a
16000	symbol-processing  level.  Psychiatrists  need  theoretical  symbolic
16100	systems from which their clinical experience can be logically derived
16200	to interpret the case histories of their patients. Otherwise they are
16300	faced with mountains of indigestible data and dross. To quote Einstein:
16400	"Science is an attempt to make the chaotic  diversity  of  our  sense
16500	experience  correspond  to  a  logically uniform system of thought by
16600	correlating single experiences with the theoretic structure."
16700	
16800	.SS The Symbol Processing Viewpoint
16900	
17000		Segments and sequences of human behavior can be studied  from
17100	many  perspectives.   In  this  monograph  I  shall view sequences of
17200	paranoid symbolic behavior from an information processing  standpoint
17300	in  which  persons  are  viewed  as symbol users. For a more complete
17400	explication and justification of this perspective, see Newell (1973)
17500	and Newell and Simon (1972).
17600		In  brief,  from  this  standpoint  we  define information as
17700	knowledge in a symbolic code. Symbols are defined as  representations
17800	of   experience   classified   as  objects,  events,  situations  and
17900	relations. A  symbolic  process  is  a  symbol-manipulating  activity
18000	posited   to   account  for  observable  symbolic  behavior  such  as
18100	linguistic interaction. Under the term "symbol-processing" I  include
18200	the seeking, manipulating and generating of symbols.
18300		Symbol-processing   explanations   postulate   an  underlying
18400	structure  of  hypothetical  processes,  functions,  strategies,   or
18500	directed  symbol-manipulating procedures, having the power to produce
18600	and being responsible for observable patterns of  phenomena.  Such  a
18700	structure  offers an ethogenic (ethos = conduct or character, genic =
18800	generating)  explanation  for  sequences  or  segments  of   symbolic
18900	behavior.  (See Harre and Secord,1972).  From an ethogenic viewpoint,
19000	we can posit processes, functions, procedures and strategies as being
19100	responsible  for  and  having  the  power  to  generate  the symbolic
19200	patterns  and  sequences  characteristic  of   the   paranoid   mode.
19300	"Strategies"  is  perhaps the best general term since it implies ways
19400	of obtaining an objective - ways which have suppleness and pliability
19500	since choice of application depends on circumstances.  However,
19600	I shall use all these terms interchangeably.
19700	
19800	.SS Symbolic Models
19900		Theories and  models  share  many  functions  and  are  often
20000	considered  equivalent.   One  important distinction lies in the fact
20100	that a theory states a subject has a certain structure but  does  not
20200	exhibit  that  structure in itself. (See Kaplan,1964). In the case of
20300	computer simulation models there exists a further useful distinction.
20400	Computer  simulation  models  which  have  the ability to converse in
20500	natural language using teletypes, actualize or realize  a  theory  in
20600	the  form of a dialogue algorithm. In contrast to a verbal, pictorial
20700	or  mathematical  representation,  such  a  model,  as  a  result  of
20800	interaction,  changes  its  states  over  time and ends up in a state
20900	different from its initial state.
21000		Einstein  once remarked, in contrasting the act of description with what
21100	is described, that it is not the function  of  science  to  give  the
21200	taste  of the soup. Today this view would be considered unnecessarily
21300	restrictive. For example, a  major  test  for  synthetic  insulin  is
21400	whether  it  reproduces  the effects, or at least some of the effects
21500	(such as lowering blood sugar), shown by natural  insulin.   To  test
21600	whether a simulation is successful, its effects must be compared with
21700	the effects produced by the naturally-occurring subject-process being
21800	modelled.     An  interactive  simulation  model  which  attempts  to
21900	reproduce sequences of experienceable reality, offers an  interviewer
22000	a  first-hand  experience  with a concrete case.    In constructing a
22100	computer simulation, a theory is modelled to discover a  sufficiently
22200	rich   structure  of  hypotheses  and  assumptions  to  generate  the
22300	observable subject-behavior  under  study.     A  dialogue  algorithm
22400	allows an observer to interact with a concrete specimen of a class in
22500	detail. In the case of our model, the level of detail is the level of
22600	the  symbolic  behavior  of  conversational  language.  This level is
22700	satisfying to a clinician since he can compare the  model's  behavior
22800	with its natural human counterparts using familiar skills of clinical
22900	dialogue. Communicating with the paranoid model by means of teletype,
23000	an  interviewer  can  directly experience for himself a sample of the
23100	type of impaired social relationship which develops with  someone  in
23200	paranoid mode.
23300		An algorithm composed of  symbolic  computational  procedures
23400	converts  input  symbolic  structures into output symbolic structures
23500	according to certain  principles.   The  modus  operandi  of  such  a
23600	symbolic  model  is simply the workings of an algorithm when run on a
23700	computer. At this level of explanation, to  answer  `why?'  means  to
23800	provide  an  algorithm  which  makes explicit how symbolic structures
23900	collaborate, interplay  and  interlock  -  in  short,  how  they  are
24000	organized to generate patterns of manifest phenomena.
24100	
24200		To simulate the sequential input-output behavior of a  system
24300	using symbolic computational procedures, one writes an algorithm
24400	which, when run on a computer, produces symbolic behavior  resembling
24500	that of the subject system being simulated (Colby, 1973).  The
24600	resemblance is achieved through the  workings  of  an  inner  posited
24700	structure   in   the   form  of  an  algorithm,  an  organization  of
24800	symbol-manipulating procedures which  are  ethogenically  responsible
24900	for the characteristic observable behavior at the input-output level.
25000	Since we do not know the structure of the "real" simulative processes
25100	used  by  the mind-brain, our posited structure stands as an imagined
25200	theoretical  analogue,  a  possible  and  plausible  organization  of
25300	processes  analogous  to  the  unknown  processes  and  serving as an
25400	attempt to explain  the  workings  of  the  system  under  study.   A
25500	simulation  model  is  thus  deeper than a pure black-box explanation
25600	because it postulates functionally equivalent  processes  inside  the
25700	box  to  account  for  outwardly observable patterns of behavior.   A
25800	simulation model constitutes an interpretive explanation in  that  it
25900	makes  intelligible  the connections between external input, internal
26000	states  and  output   by   positing   intervening   symbol-processing
26100	procedures  operating  between symbolic input and symbolic output. To
26200	be illuminating, a description of the model should make clear why and
26300	how it reacts as it does under various circumstances.
26400		Citing a universal generalization to explain an  individual's
26500	behavior  is unsatisfactory to a questioner who is interested in what
26600	powers and liabilities are latent behind manifest phenomena.  To  say
26700	"x is nasty because x is paranoid and all paranoids are nasty" may be
26800	relevant, intelligible and correct. But another type  of  explanation
26900	is  possible:  a model-explanation referring to a structure which can
27000	account for "nasty" behavior as a consequence of input  and  internal
27100	states  of  a  system.    A  model  explanation  specifies particular
27200	antecedents and processes through which antecedents generate the
27300	phenomena.   An ethogenic approach to explanation assumes perceptible
27400	phenomena display the regularities and nonrandom irregularities  they
27500	do  because  of  the  nature  of  an  imperceptible  and inaccessible
27600	underlying structure.    The  posited  theoretical  structure  is  an
27700	idealization,  unobservable  in  human  heads,  not because it is too
27800	small, but because it is an imaginary analogue  to  the  inaccessible
27900	structure.
28000		When attempts are made to explain human behavior,  principles
28100	in  addition  to  those accounting for the natural order are invoked.
28200	"Nature entertains no opinions about us", said Nietzsche.  But  human
28300	natures do, and therein lies a source of complexity for the
28400	understanding of human conduct. Until the first quarter of  the  20th
28500	century,  natural  sciences  were  guided  by  the Newtonian ideal of
28600	perfect process knowledge  about  inanimate  objects  whose  behavior
28700	could  be  subsumed  under lawlike generalizations.  When a deviation
28800	from a law was  noticed,  it  was  the  law  which  was  subsequently
28900	modified, since by definition physical objects did not have the power
29000	to break laws. When the planet Mercury was observed to  deviate  from
29100	the orbit predicted by Newtonian theory, no one accused the planet of
29200	being an intentional agent disobeying a law. Instead it was suspected
29300	that something was incorrect about the theory.
29400		Subsumptive explanation is the acceptable norm in many fields
29500	but  it is seldom satisfactory in accounting for particular sequences
29600	of behavior in living purposive systems.  When physical  bodies  fall
29700	in the macroscopic world, few find it scientifically useful to posit
29800	that bodies have an intention to fall.  But in the case of living
29900	systems,  especially  ourselves,  our  ideal  explanatory practice is
30000	teleonomically Aristotelian in utilizing a concept of intention.
30100		Consider  a  man participating in a high-diving contest.   In
30200	falling towards the water he accelerates at the rate of 32 feet per
30300	second per second. Viewing the man simply as a falling body, we explain his rate
30400	of fall by appealing to a physical law.  Viewing the man as  a  human
30500	intentionalistic  agent,  we  explain  his  dive  as the result of an
30600	intention to dive in a certain way in order to win the diving contest.
30700	His  conduct  (in  contrast  to  mere  movement) involves an intended
30800	following of certain conventional rules for what is judged by  humans
30900	to  constitute, say, a swan dive. Suppose part-way down he chooses to
31000	change his position in mid-air and enter the water thumbing his  nose
31100	at the judges. He cannot disobey the law of falling bodies but he can
31200	disobey or ignore the rules of diving. He can  also  make  a  gesture
31300	which  expresses disrespect and which he believes will be interpreted
31400	as such by the onlookers.   Our diver breaks a rule  for  diving  but
31500	follows  another  rule which prescribes gestural action for insulting
31600	behavior.   To explain the actions of diving  and  nose-thumbing,  we
31700	would  appeal,  not  to  laws  of natural order, but to an additional
31800	order, to principles of human order. This order  is  superimposed  on
31900	laws of natural order and takes into account (1) standards of
32000	appropriate action in certain situations and (2)  the  agent's  inner
32100	considerations   of  intention,  belief  and  value  which  he  finds
32200	compelling from his point of view. In this type  of  explanation  the
32300	explanandum,  that  which is being explained, is the agent's informed
32400	actions, not simply his movements. When a  human  agent  performs  an
32500	action in a situation, we can ask:  is the action appropriate to that
32600	situation and if not, why did the agent  believe  his  action  to  be
32700	called for?
32800		Symbol-processing  explanations  of  human  conduct  rely  on
32900	concepts  of  intention, belief, action, affect, etc. These terms are
33000	close to the terms of ordinary language as is characteristic of early
33100	stages  of explanations. It is also important to note that such terms
33200	are commonly utilized in describing computer algorithms which  strive
33300	to  achieve  goals.   In  an  algorithm  these  ordinary terms can be
33400	explicitly defined and represented.
33500		Psychiatry deals with the practical concerns of inappropriate
33600	action, belief, etc. on the part of a patient. His  behavior  may  be
33700	inappropriate  to  onlookers  since  it  represents  a lapse from the
33800	expected, a contravention of the human order. It may even appear this
33900	way  to  the  patient  in  monitoring  and  directing  himself.   But
34000	sometimes, as in severe cases of the  paranoid  mode,  the  patient's
34100	behavior  does  not  appear  anomalous to himself.  He maintains that
34200	anyone  who  understands  his  point  of  view,  who   conceptualizes
34300	situations  as  he  does  from the inside, would consider his outward
34400	behavior appropriate and justified. What he does  not  understand  or
34500	accept is that his inner conceptualization is mistaken and represents
34600	a misinterpretation of the events of his experience.
34700		The  model  to  be  presented  in  the  sequel constitutes an
34800	attempt to explain some regularities and  particular  occurrences  of
34900	symbolic   (conversational)   paranoid  behavior  observable  in  the
35000	clinical situation of a psychiatric interview.   The  explanation  is
35100	at the symbol-processing level of linguistically communicating agents
35200	and  is  cast  in  the  form  of  a  dialogue  algorithm.   Like  all
35300	explanations,  it  is  tentative,  incomplete,  and does not claim to
35400	represent the only conceivable structure of processes.
35500	
35600	.SS The Nature of Algorithms
35700	
35800		Theories  can  be  presented  in various forms: prose essays,
35900	mathematical  equations  and  computer  programs.    To   date   most
36000	theoretical  explanations in psychiatry and psychology have consisted
36100	of natural language essays with all their  well-known  vagueness  and
36200	ambiguities.  Many  of  these  formulations have been untestable, not
36300	because relevant observations were lacking but because it was unclear
36400	what  the  essay  was really saying.  Clarity is needed.  Science may
36500	begin with metaphors but it should end up with algorithms.
36600		An  alternative  way of formulating psychological theories is
36700	now available in the form of symbol-processing  algorithms,  computer
36800	programs,   which   have  the  virtue  of  being  explicit  in  their
36900	articulation and which can be run on  a  computer  to  test  internal
37000	consistency and external correspondence with the data of observation.
37100	The subject-matter or subject of a model is what it is  a  model  of;
37200	the  source of a model is what it is based upon. Since we do not know
37300	the "real" algorithms used by  people,  we  construct  a  theoretical
37400	model,  based  upon  computer  algorithms.  This  model  represents a
37500	partial analogy. (Harre, 1970).   The analogy is made at the  symbol-
37600	processing  level,  not  at  the  hardware  level.      A functional,
37700	computational or procedural equivalence is being  postulated.     The
37800	question   then  becomes  one  of  categorizing  the  extent  of  the
37900	equivalence.         A  beginning  (first-approximation)   functional
38000	equivalence  might be defined as indistinguishability at the level of
38100	observable  I-O  pairs.  A  stronger  equivalence  would  consist  of
38200	indistinguishability  at  inner  I-O levels.  That is, there exists a
38300	correspondence between what is being done and how it is being done at
38400	a given operational level.
38500		An algorithm represents an organization of symbol-processing
38600	strategies or functions which constitutes an "effective procedure".  An
38700	effective procedure consists of three components:
38800	.V
38900		(1) A programming language in which procedural rules of
39000		    behavior can be rigorously and unambiguously specified.
39100		(2) An organization of procedural rules which constitute 
39200		    the algorithm.
39300		(3) A machine processor which can rapidly and reliably carry
39400		    out the processes specified by the procedural rules.
39500	.END
39600	The specification of (2), written in the formally defined
39700	programming language of (1), is termed an algorithm or program
39800	whereas  (3)  involves  a computer as the machine processor, a set of
39900	deterministic physical mechanisms which can  perform  the  operations
40000	specified  in  the  algorithm.  The  algorithm  is called `effective'
40100	because it actually works, performing as intended  when  run  on  the
40200	machine processor.
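	As an illustration only (not the paranoid model described
later), a toy effective procedure can be sketched in present-day
Python, which here stands in for the programming language of (1); the
particular rules and replies are invented for the example.
.V
# (2) An organization of procedural rules: each rule examines an
#     input list of word-symbols and, when it applies, returns an
#     output list of word-symbols.
def rule_greeting(words):
    if "HELLO" in words:
        return ["HELLO", "WHAT", "IS", "YOUR", "PROBLEM"]

def rule_question(words):
    if words and words[-1].endswith("?"):
        return ["WHY", "DO", "YOU", "ASK"]

RULES = [rule_greeting, rule_question]

# (3) The machine processor (here the Python system running this
#     loop) carries out the rules: it tries each rule in turn and
#     returns the first output produced.
def process(words):
    for rule in RULES:
        output = rule(words)
        if output is not None:
            return output
    return ["I", "DO", "NOT", "UNDERSTAND"]
.END
The organization of rules, not any single rule, constitutes the
algorithm; the same processor could carry out an entirely different
organization.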
40300		A simulation model is composed  of  procedures  taken  to  be
40400	analogous to the imperceptible and inaccessible procedures.  We
40500	are not claiming they ARE analogous; we are MAKING them so.  The
40600	analogy  being  drawn  here  is between specified processes and their
40700	generating  systems.  Thus,  in   comparing   mental   processes   to
40800	computational processes, we might assert:
40900	
41000	.V
41100	      mental process            computational process
41200	      ------------------  ::  ---------------------------
41300	      brain hardware and        computer hardware and
41400	      programs                  programs
41500	.END
41600	
41700		Many of the classical mind-brain problems arose because
41800	there  did  not  exist  a  familiar,  well-understood analogy to help
41900	people imagine how a system could  work  having  a  clear  separation
42000	between its hardware descriptions and its program descriptions.  With
42100	the advent of computers and  programs  some  mind-brain  perplexities
42200	disappear.  (Colby,1971).  The analogy is not simply between computer
42300	hardware and brain wetware.  We are not comparing  the  structure  of
42400	neurons  with  the  structure  of  transistors;  we are comparing the
42500	organization of symbol-processing procedures  in  an  algorithm  with
42600	symbol-processing  procedures of the mind-brain.  The central nervous
42700	system contains a representation of the experience of its holder.   A
42800	model  builder has a conceptual representation of that representation
42900	which he demonstrates in the form of a model.  Thus the  model  is  a
43000	demonstration of a representation of a representation.
43100		An  algorithm  can  be  run  on  a  computer  in two forms, a
43200	compiled version and an interpreted version. In the compiled  version
43300	a  preliminary  translation  has  been  made  from  the  higher-level
43400	programming  language  (source  language)  into  lower-level  machine
43500	language  (object  language)  which  controls  the  on-off  state  of
43600	hardware switching devices. When the compiled  version  is  run,  the
43700	instructions  of  the machine-language code are directly executed. In
43800	the interpreted version each high-level language instruction is first
43900	translated  into  machine language, executed, and then the process is
44000	repeated with the next instruction.   One  important  aspect  of  the
44100	distinction between compiled and interpreted versions is that the
44200	compiled version, now written in  machine  language,  is  not  easily
44300	accessible  to  change  using  the higher-level language. In order to
44400	change the compiled program, the source-language version must be
44500	modified and then re-compiled into the object language.  The
44600	rough analogy with ever-changing  human  symbolic  behavior  lies  in
44700	suggesting  that  modifications require change at the source-language
44800	level. Otherwise compiled algorithms are inaccessible to second-order
44900	monitoring and modification.
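	The distinction can be sketched with a toy example in
present-day Python; the two-word instruction language and the
translation scheme are invented for the illustration and are not part
of the model.
.V
SOURCE = ["PRINT HELLO", "PRINT GOODBYE"]   # higher-level source program

def translate(instruction):
    # Translate one source instruction into a directly executable
    # operation (standing in for machine-language code).
    opcode, argument = instruction.split()
    if opcode == "PRINT":
        return lambda: print(argument)
    raise ValueError("unknown instruction: " + instruction)

def run_interpreted(source):
    # Interpreted version: each instruction is translated and then
    # immediately executed, one at a time.
    for instruction in source:
        translate(instruction)()

def run_compiled(source):
    # Compiled version: the whole program is translated first and
    # only the translated form is executed.  To change its behavior
    # one must edit SOURCE and translate it again.
    object_code = [translate(instruction) for instruction in source]
    for operation in object_code:
        operation()
.END
In the sketch the list object_code plays the part of the compiled
version: it can still be run, but it is no longer convenient to
modify except by returning to the source language.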
45000		Since we are taking running computer programs as a source  of
45100	analogy for a paranoid model, logical errors or pathological behavior
45200	on  the  part   of   such   programs   are   of   interest   to   the
45300	psychopathologist.   These  errors  can  be  ascribed to the hardware
45400	level, to the interpreter or to the programs  which  the  interpreter
45500	executes.    Different  remedies are required at different levels. If
45600	the analogy  is  to  be  clinically  useful  in  the  case  of  human
45700	pathological  behavior,  it  will  become  a  matter  of  influencing
45800	symbolic behavior with the appropriate techniques.
45900		Since  the algorithm is written in a programming language, it
46000	is hermetic except to a few people,  who  in  general  do  not  enjoy
46100	reading   other  people's  code.     Hence  the  intelligibility  and
46200	scrutability requirement for explanations must be met in other  ways.
46300	In  an attempt to open the algorithm to scrutiny I shall describe the
46400	model in detail, making liberal use of diagrams and interview examples.
46500	
46600	
46700	.SS Analogy
46800	
46900		I  have  stated  that  an  interactive  simulation  model  of
47000	symbol-manipulating  processes  reproduces  sequences   of   symbolic
47100	behavior  at the level of linguistic communication.  The reproduction
47200	is achieved through the operations of an algorithm consisting  of  an
47300	organization   of   hypothetical   symbol-processing   strategies  or
47400	procedures which can  generate  the  I-O  behavior  of  the  subject-
47500	processes under investigation.  The algorithm is an "effective
47600	procedure" in the sense that it really works in the manner intended by the
47700	model-builders.  In the model to be described, the paranoid algorithm
47800	generates  linguistic  I-O  behavior  typical   of   patients   whose
47900	symbol-processing  is dominated by the paranoid mode. Comparisons can
48000	be made between samples of the I-O behaviors of patients  and  model.
48100	But  the  analogy is not to be drawn at this level.   Mynah birds and
48200	tape recorders also reproduce human linguistic behavior  but  no  one
48300	believes  the  reproduction  is achieved by powers analogous to human
48400	powers.   Given that the manifest outermost I-O behavior of the model
48500	is  indistinguishable  from  the  manifest  outward  I-O  behavior of
48600	paranoid patients, does this imply that the  hypothetical  underlying
48700	processes  used  by  the  model are analogous to (or perhaps the same
48800	as?) the underlying processes used by persons in the  paranoid  mode?
48900	This deep and far-reaching question should be approached with caution
49000	and only when we are  first  armed  with  some  clear  notions  about
49100	analogy,  similarity, faithful reproduction, indistinguishability and
49200	functional equivalence.
49300		In comparing two things (objects, systems or processes) one
49400	can   cite   properties  they  have  in  common (positive  analogy),
49500	properties they do not share (negative analogy) and properties  which
49600	we  do  not  yet  know whether they are positive or negative (neutral
49700	analogy). (See Hesse,1966). No two things are exactly alike in  every
49800	detail.   If  they  were identical in respect to all their properties
49900	then they would be copies. If they were identical  in  every  respect
50000	including  their  spatio-temporal  location we would say we have only
50100	one thing instead of two. Everything  resembles  something  else  and
50200	maybe everything else, depending upon how one cites properties.
50300		In an analogy a similarity relation is  evoked.  "Newton  did
50400	not  show  the  cause of the apple falling but he showed a similitude
50500	between the apple and the stars." (D'Arcy Thompson). Huygens suggested
50600	an analogy between sound waves and light waves in order to understand
50700	something less well-understood (light) in terms of  something  better
50800	understood   (sound).   To  account  for  species  variation,  Darwin
50900	postulated a  process  of  natural  selection.    He  constructed  an
51000	analogy  from two sources, one from artificial selection as practiced
51100	by domestic breeders of animals and one from  Malthus'  theory  of  a
51200	competition  for  existence  in a population increasing geometrically
51300	while its resources increase arithmetically. Bohr's model of the atom
51400	offered  an  analogy  between solar system and atom. These well-known
51500	historical examples should be sufficient here to illustrate the  role
51600	of analogies in theory construction.    Analogies are made in respect
51700	to  those  properties  which  constitute  the  positive  and  neutral
51800	analogy.     The  negative analogy is ignored.   Thus Bohr's model of
51900	the atom as a miniature planetary system was not intended to  suggest
52000	that  electrons  possessed  color or that planets jumped out of their
52100	orbits. 
52200	
52300	.SS Functional Equivalence
52400	
52500		When human symbolic processes are the subject of a simulation
52600	model, we draw the analogy from two sources, symbolic computation and
52700	psychology.  The  analogy  made  is between systems known to have the
52800	power to process symbols, namely,  persons  and  computers.       The
52900	properties  compared  in  the  analogy  are obviously not physical or
53000	substantive such as blood and wires, but functional  and  procedural.
53100	We  want  to  assume  that not-well-understood mental procedures in a
53200	person are similar to  the  more  accessible  and  better  understood
53300	procedures of symbol-processing which take place in a computer.   The
53400	analogy is one  of  functional  or  procedural  equivalence.  (For  a
53500	further    account   of   functional   analysis   see   Hempel,1965).
53600	Mousetraps are functionally equivalent.    There exists a  large  set
53700	of  physical  mechanisms for catching mice. The term "mousetrap" says
53800	what each member of the set has in common.  Each takes as input a live
53900	mouse  and  yields  as output a dead one. Systems equivalent from one
54000	point of view may not be equivalent from another (Fodor,1968).
54100		If  model  and  human  are  indistinguishable at the manifest
54200	level of linguistic I-O pairs, then they can be considered equivalent
54300	at  that  level.      If they can be shown to be indistinguishable at
54400	more internal symbolic levels, then a stronger equivalence exists. How stringent
54500	and  how  extensive  are  the  demands for equivalence to be?    Must
54600	there be point-to-point correspondences at every level?   What is  to
54700	count as a point and what are the levels? Procedures can be specified
54800	and ostensively pointed to in an algorithm, but how can we  point  to
54900	unobservable  symbolic  processes  in  a person's head?   There is an
55000	inevitable limit to scrutinizing the "underlying"  processes  of  the
55100	world.   Einstein  likened  this  situation  to  a man explaining the
55200	behavior of a watch without opening it: "He will  never  be  able  to
55300	compare  his  picture  with  the  real  mechanism  and he cannot even
55400	imagine the possibility or meaning of such a comparison".
55500		In  constructing  an   algorithm   one   puts   together   an
55600	organization  of  collaborating  functions or procedures.  A function
55700	takes some symbolic structure  as  input  and  yields  some  symbolic
55800	structure as output. Two computationally equivalent functions, having
55900	the same input and yielding the same output, can differ `inside'  the
56000	function at the instruction level.
56100		Consider  an elementary programming problem which students in
56200	symbolic computation are often asked to solve.  Given  a  list  L  of
56300	symbols,  L=(A  B  C  D), as input, construct a function or procedure
56400	which will convert this list to the list RL in which the order of the
56500	symbols  is  reversed,  i.e.   RL=(D  C B A).  There are many ways of
56600	solving this problem and the code of one student may  differ  greatly
56700	from that of another at the level of individual instructions. But the
56800	differences of such details are irrelevant. What  is  significant  is
56900	that  the  solutions  make  the required conversion from L to RL. The
57000	correct solutions will  all  be  computationally  equivalent  at  the
57100	input-output  level  since  they take the same symbolic structures as
57200	input and produce the same symbolic output.
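	As a minimal sketch of two such solutions, written here in
present-day Python rather than in a list-processing language of the
period, an iterative and a recursive version differ at the
instruction level yet are computationally equivalent at the
input-output level.
.V
def reverse_iterative(l):
    # Build RL by moving each symbol of L, in turn, to the front
    # of a new list.
    rl = []
    for symbol in l:
        rl.insert(0, symbol)
    return rl

def reverse_recursive(l):
    # Reverse the tail of L and attach the head symbol at the end.
    if not l:
        return []
    return reverse_recursive(l[1:]) + [l[0]]

# The two solutions are indistinguishable at the level of
# observable I-O pairs.
L = ["A", "B", "C", "D"]
assert reverse_iterative(L) == reverse_recursive(L) == ["D", "C", "B", "A"]
.END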
57300		If  we  propose  that  an  algorithm  we  have constructed is
57400	functionally equivalent to what goes on in humans when  they  process
57500	symbolic structures, how can we justify this position?
57600	Indistinguishability tests at,  say,  the  linguistic  level  provide
57700	evidence  only for beginning equivalence. We would like to be able to
57800	have access to the underlying processes in humans the way we can with
57900	algorithms.  (Admittedly, we do not directly observe processes at all
58000	levels but only  the  products  of  some).  The  difficulty  lies  in
58100	identifying,  making  accessible,  and  counting  processes  in human
58200	heads.    Many symbol-processing experiments are now  being  designed
58300	and  carried  out.  We  must  have  great  patience with this type of
58400	experimental information-processing psychology.
58500		In  the meantime, besides first-approximation I-O equivalence
58600	and plausibility arguments,  one  might  appeal  to  extra-evidential
58700	support  offering  parallelisms  from neighboring scientific domains.
58800	One can offer analogies between what is known to go on at a molecular
58900	level  in  the  cells  of  living  organisms  and  what goes on in an
59000	algorithm. For example, a DNA molecule  in  the  nucleus  of  a  cell
59100	consists  of an ordered sequence (list) of nucleotide bases (symbols)
59200	coded in triplets termed codons (words). Each codon
59300	specifies which amino acid during protein synthesis is to be linked
59400	into the chain of polypeptides making up the protein.   The codons
59500	function like instructions in a programming language. Some codons are
59600	known to operate as terminal  symbols  analogous  to  symbols  in  an
59700	algorithm which mark the end of a list. If, as a result of a
59800	mutation, a nucleotide base is changed, the usual protein will not be
59900	synthesized.  The resulting polypeptide chain may have lethal or
60000	trivial consequences for the organism, depending on what must be
60100	passed on to other processes which require the polypeptide to be handed
60200	over to them. The same holds in an algorithm: if a symbol or word in a
60300	procedure  is incorrect, the procedure cannot operate in its intended
60400	manner.   Such a result may be lethal or  trivial  to  the  algorithm
60500	depending  on  what  information the faulty procedure must pass on at
60600	its interface with other procedures in the overall organization. Each
60700	procedure   in  an  algorithm  is  embedded  in  an  organization  of
60800	collaborating procedures just as are functions in  living  organisms.
60900	We  know that at the molecular level of living organisms there exists
61000	a process such as serial progression  along  a  nucleotide  sequence,
61100	which is analogous to stepping down a list in an algorithm.   Further
61200	analogies can be made between point mutations in which DNA bases  can
61300	be   inserted,   deleted,   substituted  or  reordered  and  symbolic
61400	computation in which the same operations are commonly carried out  on
61500	symbolic    structures.     Such   analogies   are   interesting   as
61600	extra-evidential support but obviously  closer  linkages  are  needed
61700	between  the macro-level of symbolic processes and the micro-level of
61800	molecular information-processing within cells.
62000		To  obtain  evidence for the acceptability of a model as true
62100	or authentic, empirical tests are utilized as validation  procedures.
62200	Such  tests  should  also tell us which is the best among alternative
62300	versions of a family of models and, indeed, among alternative families
62400	of  models.  Scientific explanations do not stand alone in isolation.
62500	They are evaluated relative to rival contenders for the  position  of
62600	"best  available".  Once  we  accept  a  theory  or model as the best
62700	available, can we be sure it is correct or true?    We can never know
62800	with certainty. Theories and models are provisional approximations to
62900	nature, destined to be superseded by better ones.